OpenAI
Do you want to experiment with OpenAI models on Weave without any setup? Try the LLM Playground.
Tracing
It’s important to store traces of LLM applications in a central database, both during development and in production. You’ll use these traces for debugging and to help build a dataset of tricky examples to evaluate against while improving your application. Weave can automatically capture traces for the openai Python library. Start capturing by calling weave.init(<project-name>) with a project name of your choice.
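For example, here is a minimal sketch (the project name is illustrative); once weave.init runs, calls made through the openai client are captured automatically:

```python
import weave
from openai import OpenAI

weave.init("my-openai-project")  # illustrative project name

client = OpenAI()

# No decorator needed: this call is traced automatically.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```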
We also capture the tools used for function calling with OpenAI Functions and OpenAI Assistants.
Structured Outputs
Weave also supports structured outputs with OpenAI. This is useful for ensuring that your LLM responses follow a specific format.
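As a rough sketch, using the openai SDK’s parse helper with a Pydantic model (the project name, model version, and schema below are illustrative):

```python
import weave
from openai import OpenAI
from pydantic import BaseModel

weave.init("my-openai-project")  # illustrative project name
client = OpenAI()

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

# The parsed, schema-validated response is traced like any other call.
completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Alice and Bob meet on Friday."}],
    response_format=CalendarEvent,
)
print(completion.choices[0].message.parsed)
```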
Async Support
Weave also supports async functions for OpenAI.
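A minimal sketch with AsyncOpenAI (project name illustrative); the awaited call is traced the same way as a synchronous one:

```python
import asyncio
import weave
from openai import AsyncOpenAI

weave.init("my-openai-project")  # illustrative project name
client = AsyncOpenAI()

async def main() -> None:
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```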
Streaming Support
Weave also supports streaming responses from OpenAI.
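A minimal streaming sketch (project name illustrative); the call is recorded even though the response arrives in chunks:

```python
import weave
from openai import OpenAI

weave.init("my-openai-project")  # illustrative project name
client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Count to five."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # some chunks carry no content
        print(delta, end="")
```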
Tracing Function Calls
Weave also traces function calls made by OpenAI when using tools.
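A sketch of a tool-enabled call (the get_weather tool and project name are hypothetical); the tool call the model requests shows up in the trace:

```python
import json
import weave
from openai import OpenAI

weave.init("my-openai-project")  # illustrative project name
client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
message = response.choices[0].message
if message.tool_calls:  # the model may or may not request a tool
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
```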
Logging Additional Data
You can log additional data to your traces by using the weave.log function.
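The exact signature of weave.log isn’t shown in this section; the sketch below assumes it accepts a dictionary of key/value pairs to attach to the current trace, which may differ from the actual API:

```python
import weave

weave.init("my-openai-project")  # illustrative project name

@weave.op
def classify(text: str) -> str:
    label = "positive"  # placeholder for a real OpenAI call
    # Assumption: weave.log takes a dict of extra fields for the
    # current trace; check the Weave docs for the exact signature.
    weave.log({"text_length": len(text), "label": label})
    return label
```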
Batch API
Weave also supports the OpenAI Batch API for processing multiple requests.
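A minimal sketch of submitting a batch, assuming a requests.jsonl file with one JSON-encoded request per line (the file name and project name are illustrative):

```python
import weave
from openai import OpenAI

weave.init("my-openai-project")  # illustrative project name
client = OpenAI()

# Upload the batch input file (assumed to exist locally).
batch_file = client.files.create(
    file=open("requests.jsonl", "rb"),
    purpose="batch",
)

# Create the batch job against the chat completions endpoint.
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)
```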
Assistants API
Weave also supports the OpenAI Assistants API for building conversational AI applications.
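A rough sketch of an Assistants API round trip (the assistant name, instructions, and project name are illustrative):

```python
import weave
from openai import OpenAI

weave.init("my-openai-project")  # illustrative project name
client = OpenAI()

assistant = client.beta.assistants.create(
    name="Math Tutor",  # hypothetical assistant
    instructions="Answer math questions concisely.",
    model="gpt-4o",
)
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What is 12 * 7?"
)
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)
if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)
```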
Cost Tracking
Weave automatically tracks the cost of your OpenAI API calls. You can view the cost breakdown in the Weave UI. Cost tracking is available for all OpenAI models and is calculated based on the latest OpenAI pricing.
Tracing Custom Functions
You can also trace custom functions that use OpenAI by applying the @weave.op decorator.
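For example (the function and project name are illustrative); the decorated function and the nested OpenAI call both appear in the trace:

```python
import weave
from openai import OpenAI

weave.init("my-openai-project")  # illustrative project name
client = OpenAI()

@weave.op
def summarize(text: str) -> str:
    # The nested OpenAI call is traced as a child of this op.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    )
    return response.choices[0].message.content

print(summarize("Weave traces decorated functions and their LLM calls."))
```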
Next Steps
Now that you’ve set up tracing for OpenAI, you can:
- View traces in the Weave UI: Go to your Weave project to see traces of your OpenAI calls
- Create evaluations: Use your traces to build evaluation datasets
- Monitor performance: Track latency, costs, and other metrics
- Debug issues: Use traces to understand what’s happening in your LLM application